Patent abstract:
Methods and apparatus are provided for sizing an object using nearby devices to obtain image data representative of the object from multiple perspectives. An exemplary method includes recording, by a first image pickup device, first image data representative of an object from a first perspective; determining whether a second imaging device is in proximity to the first imaging device; and when the second image pickup device is in proximity to the first image pickup device, sending a request to the second image pickup device for second image data representative of the object from a second perspective, the first image data and the second image data being combinable to form a composite representation of the object.
Publication number: BE1025917A9
Application number: E20185781
Filing date: 2018-11-07
Publication date: 2019-12-17
Inventors: Michael J Giannetta; Matthew Louis Kowalski; Charles Burton Swope; Nicole Daphne Tricoukes
Applicant: Symbol Technologies Llc
IPC primary class:
Patent description:

METHODS AND DEVICES FOR DIMENSIONING AN OBJECT USING NEARBY DEVICES
BACKGROUND
In an inventory environment, such as a retail store, a warehouse, a shipping facility, etc., it is useful to know the dimensions of an object, such as a box.
SUMMARY OF THE INVENTION
According to the invention, a computer-implemented method is provided for using nearby image recording devices to record image data representative of an object from multiple perspectives, the method comprising: recording, by a first image recording device, first image data representative of an object from a first perspective; determining, by a processor, whether a second image recording device is in the vicinity of the first image recording device; and, when the second image recording device is in the vicinity of the first image recording device, sending a request to the second image recording device for second image data representative of the object from a second perspective, the first image data and the second image data being combinable to form a composite representation of the object.
The first image recording device may, for example, be a heads-up display assembly.
More preferably, the first image capture device may be a first heads-up display assembly associated with a first user, and the second image capture device may be a second heads-up display assembly associated with a second user.
For example, the first image capture device may be a mobile device associated with a first user, and the second image capture device may be a stationary device.
The computer-implemented method may further include receiving a user request to record image data representative of the object, and activating the first image recording device to record first image data based on the user request.
The computer-implemented method may further comprise detecting, by a sensor, that the object is currently in a target location or within a target range for dimensioning, and activating, by the processor, the first image capture device to record the first image data based on the object being in the target location or the target range.
In the computer-implemented method, determining whether the second image capture device is in the vicinity of the first image capture device may comprise: receiving, by a processor, from the first image capture device, data indicative of a position of the first image capture device; receiving, by a processor, from the second image capture device, data indicative of a position of the second image capture device; calculating, by a processor, based on the position of the first image capture device and the position of the second image capture device, a distance between the first image capture device and the second image capture device; and determining, by a processor, whether the calculated distance between the first image capture device and the second image capture device is within a proximity distance threshold.
In the computer-implemented method, determining whether the second image capture device is in the vicinity of the first image capture device may also comprise: receiving, by a processor, from a fixed RFID reader, an indication that the fixed RFID reader has received an RF signal sent by the first image capture device; determining, by a processor, based on the indication from the fixed RFID reader, a position of the first image capture device; receiving, by a processor, from the fixed RFID reader, an indication that the fixed RFID reader has received an RF signal sent by the second image capture device; determining, by a processor, based on the indication from the fixed RFID reader, the position of the second image capture device; calculating, by a processor, based on the position of the first image capture device and the position of the second image capture device, a distance between the first image capture device and the second image capture device; and determining, by a processor, whether the calculated distance between the first image capture device and the second image capture device is within a proximity distance threshold.
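For illustration only, a minimal sketch of this RFID-based variant follows; the weighted-centroid position estimate, the function names, and the threshold value are assumptions for the sketch and not part of the claimed method.

    import math

    def estimate_position(read_events):
        # read_events: list of (reader_x, reader_y, signal_strength) tuples reported
        # by fixed RFID readers that received the device's RF signal.
        # A signal-strength-weighted centroid serves as a crude triangulation.
        total = sum(strength for _, _, strength in read_events)
        x = sum(rx * strength for rx, _, strength in read_events) / total
        y = sum(ry * strength for _, ry, strength in read_events) / total
        return (x, y)

    def within_proximity(position_a, position_b, threshold=3.0):
        # True when the calculated distance between the two devices is within
        # the proximity distance threshold (threshold value assumed, in metres).
        return math.dist(position_a, position_b) <= threshold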
In the computer-implemented method, determining whether the second image capture device is in the vicinity of the first image capture device may also comprise detecting, by the first image capture device, a short-range wireless signal sent by the second image capture device, and determining, based on the detected short-range wireless signal, that the second image capture device is in the vicinity of the first image capture device.
The computer-implemented method can advantageously further comprise combining the first image data and the second image data to make a three-dimensional point cloud, and dimensioning the object using the three-dimensional point cloud.
According to another aspect of the invention, there is provided a system for using nearby image recording devices to record image data representative of an object from multiple perspectives, the system comprising: a plurality of image recording devices; a memory configured to store computer-executable instructions; and a processor configured to interact with the plurality of image recording devices and the memory, and configured to execute the computer-executable instructions to cause the processor to: record, by a first image recording device, first image data representative of an object from a first perspective; determine whether a second image recording device is in the vicinity of the first image recording device; and, when the second image recording device is in the vicinity of the first image recording device, send a request to the second image recording device for second image data representative of the object from a second perspective, the first image data and the second image data being combinable to form a composite representation of the object.
The first image recording device may, for example, be a heads-up display assembly.
The first image recording device may be a first heads-up display assembly associated with a first user, and the second image recording device may be a second heads-up display assembly associated with a second user.
The first image recording device may be, for example, a mobile device, and the second image recording device may, for example, be stationary.
The computer-executable instructions, when executed, may preferably cause the processor to receive a user request to record image data representative of the object, and to activate the first image recording device to record the first image data based on the user request.
The processor may be configured to interact with a sensor, and the computer-executable instructions, when executed, may preferably cause the processor to detect, by the sensor, that the object is currently in a target location or within a target range for dimensioning, and to activate the first image recording device to record the first image data based on the object being in the target location or the target range.
The computer-executable instructions, when executed, may also cause the processor to: receive, from the first image capture device, data indicative of a position of the first image capture device; receive, from the second image capture device, data indicative of a position of the second image capture device; calculate, based on the position of the first image capture device and the position of the second image capture device, a distance between the first image capture device and the second image capture device; and determine whether the calculated distance between the first image capture device and the second image capture device is within a proximity distance threshold.
The computer-executable instructions may further, when executed, cause the processor to: receive, from a fixed RFID reader, an indication that the fixed RFID reader has received an RF signal sent by the first image capture device; determine, based on the indication from the fixed RFID reader, a position of the first image capture device; receive, from the fixed RFID reader, an indication that the fixed RFID reader has received an RF signal sent by the second image capture device; determine, based on the indication from the fixed RFID reader, the position of the second image capture device; calculate, based on the position of the first image capture device and the position of the second image capture device, a distance between the first image capture device and the second image capture device; and determine whether the calculated distance between the first image capture device and the second image capture device is within a proximity distance threshold.
The computer-executable instructions may further, when executed, cause the processor to detect, by the first image capture device, a short-range wireless signal sent by the second image capture device, and determine based on the detected short-range wireless signal, that the second image capture device is in the vicinity of the first image capture device.
The instructions executable by the computer, when executed, may also cause the processor to combine the first image data and the second image data to make a three-dimensional point cloud, and to dimension the object using the three-dimensional point cloud.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
The accompanying figures, where identical reference numbers refer to identical or functionally comparable elements in the different views, together with the detailed description below, are included in and form part of the description, and serve to further illustrate embodiments of concepts disclosed herein and to explain various principles and advantages of those embodiments.
FIG. 1 is a block diagram of an exemplary HUD assembly constructed in accordance with the teachings of this description.
FIGs. 2A and 2B illustrate an exemplary HUD assembly that may implement the exemplary HUD assembly of FIG. 1.
FIG. 3 illustrates the exemplary HUD assembly of FIGs. 2A and 2B mounted on a user's head.
FIG. 4 illustrates example cameras attached to an example HUD assembly.
FIG. 5A illustrates a user wearing an example HUD assembly to which cameras are attached, while the user looks at an example box object to be dimensioned.
FIG. 5B illustrates multiple users wearing sample HUD assemblies to which cameras are attached, with each user looking at a sample box object to be dimensioned.
FIG. 5C illustrates an example box object to be dimensioned within the field of view of a mounted example camera, and a user wearing an example HUD assembly to which cameras are attached, the user looking at the example box object to be dimensioned.
FIG. 6 is a block diagram representative of an exemplary logic circuit configured in accordance with the teachings of this description.
FIG. 7 is a flowchart of an exemplary method described herein for using nearby image recording devices to record image data representative of an object from multiple perspectives.
DETAILED DESCRIPTION
Systems and methods are provided for dimensioning an object by using nearby image capture devices to obtain image data representative of the object from multiple perspectives. Although image data from a single image capture device from a single perspective may be sufficient to determine one or more dimensions of an object, dimensioning operations (e.g., calculations) are enhanced by the availability of additional image data, e.g., from different perspectives. That is, information indicative of different sides of the object is useful in determining the dimensions of an object that appears in image data. Examples disclosed herein detect that one or more secondary image capture devices are in the vicinity (e.g., within a threshold distance) of a primary image capture device. In response to such detection(s), examples described herein request that the nearby image capture devices obtain time-stamped image data of the object such that the associated additional information can be used to form, for example, a composite representation of the object that is more granular than the representation from a single perspective. The additional information contained in the composite representation improves the accuracy and speed of the dimensioning operations.
FIG. 1 is a block diagram of an exemplary HUD assembly 100 constructed in accordance with the teachings of this description. Alternative implementations of the exemplary HUD assembly 100 of FIG. 1 include one or more additional or alternative elements, processes, and / or devices. In some examples, one or more of the elements, processes, and / or devices of the example HUD assembly 100 of FIG. 1 are combined, separated, rearranged, or omitted. Although examples described herein are described in conjunction with a HUD assembly as a primary imaging device, examples described herein may be used in additional or alternative devices, such as a handheld mobile computer equipped with a screen and cameras (e.g., stereoscopic cameras and / or depth sensors) that are suitable for obtaining image data and dimensioning an object. As used herein, image data refers to any suitable type of data that can be used to dimension an object. For example, in some examples, the recorded image data is a two-dimensional image, such as an image and / or a pair of images recorded by stereoscopic cameras. In some examples, the recorded image data is a depth value at a coordinate. In some examples, the recorded image data is a combination of an RGB value on a coordinate and a depth value on the coordinate, sometimes referred to as a voxel.
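As a purely illustrative aside, the image-data forms mentioned above could be represented roughly as follows; the class and field names are assumptions made for this sketch, not part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class DepthSample:
        # A depth value at an image coordinate.
        u: int
        v: int
        depth: float

    @dataclass
    class Voxel:
        # An RGB value at a coordinate combined with the depth value at that
        # coordinate, as described above.
        u: int
        v: int
        depth: float
        r: int
        g: int
        b: int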
The exemplary HUD assembly 100 of FIG. 1 includes a presentation generator 102 and a head mount 104. The head mount 104 is constructed to attach the presentation generator 102 to a person's head such that a presentation generated by the presentation generator 102 is consumable by the person. The presentation includes visual media components (e.g., images) and/or audio media components. To generate images such as static or animated text and/or graphics, the presentation generator 102 of FIG. 1 comprises an image generator 106. The exemplary image generator 106 of FIG. 1 is connected to one or more sources of image data. The image data received at the image generator 106 is representative of, for example, text, graphics and/or augmented reality elements (e.g., information overlaid on objects within the field of view).
In the illustrated example, the presentation generator 102 is configured to display one or more messages or indicators associated with, for example, requests sent by the HUD assembly 100 to nearby image recording devices, the requests asking the nearby image recording devices to obtain time-stamped image data representative of an object held by a user of the HUD assembly 100. Additionally or alternatively, the presentation generator 102 may be configured to display one or more messages or indicators indicating that one or more nearby image recording devices have accepted the request and/or recorded the requested image data.
The example image generator 106 of FIG. 1 uses a light source 108 and a screen/optic 110 to display visual components of the presentation. In some examples, the example image generator 106 includes exposure units that use the light source 108 (e.g., light emitting diodes (LEDs)) to generate light based on the received data. In some examples, the exposure units receive processed data in a state for direct conversion to light. In other examples, the exposure units process raw image data before the image data is converted to light. To perform such processing, the exposure units include and/or are in communication with one or more logic circuits that are configured to process the image data.
The exposure units convert the received image data into patterns and pulses of light, and communicate the generated light to the screen/optic 110, such that the images associated with the received data are displayed to the user via the screen/optic 110. In some examples, the exposure units include optics that condition or manipulate the generated light (e.g., polarize and/or collimate) before the light is provided to the screen/optic 110.
In some examples, the screen/optic 110 includes a waveguide that guides the light received from the exposure units in a direction and pattern associated with the image data. In some examples, the waveguide comprises a plurality of internal surfaces that form a light guide to internally reflect the light as the light propagates from an input to an output. The waveguide includes, for example, a grating at the output to bend the light toward an eye of the user and thereby display the image to the user. Further, in some examples, the waveguide comprises a first and a second lens arranged to be placed over, respectively, a first eye and a second eye of the user. However, any suitable shape or size is possible for such a waveguide. In some examples, the waveguide is transparent such that the user can see the environment simultaneously with the displayed image, or only the environment when no image is displayed on the waveguide.
Although the example image generator 106 uses the light source 108 and the screen/optic 110 to present visual components of the presentation, the example HUD assembly 100 of FIG. 1 may use any suitable image generation technology such as, for example, cathode ray tube (CRT) devices or scanning lasers.
The exemplary presentation generator 102 of FIG. 1 includes an audio generator 112 that receives audio data and converts the audio data to sound through a headphone jack 114 and/or a speaker 116. For example, the audio generator 112 may generate a sound to indicate that a nearby image recording device has accepted a request for image data of an object and/or that the nearby image recording device has obtained the requested image data. In some examples, the audio generator 112 and the image generator 106 work together to generate an audiovisual presentation.
In the example of FIG. 1, the example presentation generator 102 includes (e.g., accommodates) a plurality of sensors 118. In the example of FIG. 1, the plurality of sensors 118 includes a light sensor 120, a motion sensor 122 (e.g., an accelerometer), a gyroscope 124, a microphone 126, and a proximity sensor 127. In some examples, the presentation generated by the example image generator 106 and/or the audio generator 112 is affected by one or more measurements and/or detections generated by one or more of the sensors 118. A characteristic (e.g., degree of opacity) of the display generated by the image generator 106 may depend, for example, on an intensity of ambient light detected by the light sensor 120. Additionally or alternatively, one or more modes, operational parameters, or settings are determined by measurements and/or detections generated by one or more of the sensors 118. For example, the presentation generator 102 may enter a standby mode if the motion sensor 122 has not detected motion in a threshold amount of time.
In the illustrated example, the proximity sensor 127 is configured to provide, for example, a server and/or other image capture devices located within a range of the HUD assembly 100 with location and/or motion information associated with the HUD assembly 100. In some examples, the proximity sensor 127 coordinates with other proximity sensors carried by the other image capture devices to determine whether one or more of the image capture devices are within a threshold distance of each other (i.e., are near each other). The proximity sensor 127 may, for example, attempt to pair with nearby devices (e.g., via a Bluetooth® communication device) that are also equipped with similar sensors (e.g., sensors operating according to the same communication protocol). In some examples, the HUD assembly 100 and the other image capture devices are locatable using an RFID-based localization system via the proximity sensor 127 and other similar sensors carried by the other image capture devices. The proximity sensor 127 can be configured, for example, to send radio frequency signals that are read by fixed RFID readers capable of locating the HUD assembly 100 on the basis of the signals (e.g., via triangulation techniques). In some examples, the proximity sensor 127 is a satellite-based sensor capable of providing location information based on, for example, a GPS system that is also aware of the locations of the other image capture devices. In such cases, the locations of the different image capture devices may be compared to determine whether any of the image capture devices are in close proximity to each other. In some examples, the proximity sensor 127 provides motion and/or position information associated with the HUD assembly 100 such as, for example, pitch, roll, yaw, elevation and heading information. In some examples, the proximity sensor 127 defines a local geometry or coordinate system and the associated location on the coordinate system associated with the HUD assembly 100. For example, when the HUD assembly 100 is initialized (e.g., turned on), the proximity sensor 127 may log the start location of the HUD assembly as (0, 0, 0) in the coordinate system. In such cases, the proximity sensor 127 updates the location of the HUD assembly 100 as the user moves. Further, in such cases, other image capture devices have locations in the coordinate system (e.g., fixed locations for static image capture devices and updated locations for mobile image capture devices such as other HUD assemblies or handheld mobile computer devices). As such, image capture devices within a threshold distance of each other are considered nearby devices according to the coordinate system. In some examples, the proximity sensor 127 uses signal-strength (e.g., WiFi signal) systems to locate the HUD assembly 100 and/or other image capture devices relative to each other and/or to a coordinate system.
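A minimal sketch of the local-coordinate bookkeeping described above; the class name, the three-axis tuple layout, and the device identifiers are assumptions made for illustration only.

    class DevicePositionRegistry:
        # Tracks device locations in the local coordinate system defined by the
        # proximity sensor. Positions are (x, y, z) tuples in that system.

        def __init__(self):
            self.positions = {}

        def initialize(self, device_id):
            # When a device is initialized (e.g. turned on), its start location
            # is logged as (0, 0, 0) in the coordinate system.
            self.positions[device_id] = (0.0, 0.0, 0.0)

        def update(self, device_id, x, y, z):
            # Called as a mobile device (e.g. a HUD assembly) moves; static
            # image capture devices simply keep their fixed registered location.
            self.positions[device_id] = (x, y, z)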
As described in detail herein, data provided by the proximity sensor 127 is used to make use of nearby image recording devices to obtain additional image data representative of an object being dimensioned by the HUD assembly 100 (i.e., the first image capture device).
The exemplary presentation generator 102 of FIG. 1 includes a camera subsystem 128. In some examples, the camera subsystem 128 is attached to or is carried by the same housing as the presentation generator 102. In some examples, the camera subsystem 128 is attached to or is carried by the head mount 104. The example camera subsystem 128 includes two cameras 130 and a microphone 132 for, respectively, recording image data and audio data representative of an environment surrounding the HUD assembly 100. In some examples, the camera subsystem 128 includes one or more depth sensors to detect distances between objects in a field of view and the HUD assembly 100. In some examples, the image and/or audio data recorded by the cameras 130 and/or microphone 132 are integrated with the presentation generated by the image generator 106 and/or the audio generator 112. The camera subsystem 128 of FIG. 1, for example, communicates data to the image generator 106, which can process the image data to generate one or more associated images on the screen/optic 110. In some examples, the image data and/or audio data recorded by, respectively, the cameras 130 and/or the microphone 132 are stored in the memory 135 of the example HUD assembly 100. In some examples, the image data and/or audio data recorded by, respectively, the cameras 130 and/or the microphone 132 are communicated via, for example, a USB interface 134 of the camera subsystem 128 to a device (e.g., a server or an external memory) external to the HUD assembly 100.
The exemplary presentation generator 102 of FIG. 1 includes a plurality of interfaces 136 that are configured to allow the HUD assembly 100 to communicate with one or more external devices 142 and one or more networks 138. In the example of FIG. 1, the interfaces 136 include converters 140 (e.g., an HDMI to LVDS-RGB converter) to convert data from one format to another, a USB interface 144 and a Bluetooth® audio transmitter 146. In some examples, the exemplary Bluetooth® audio transmitter 146 works together with one or both of the microphones 126, 132 of the HUD assembly 100 to receive voice input from the user and to transfer the voice input to one or more external devices 142. For example, voice input can be provided via the HUD assembly 100, using the Bluetooth® audio transmitter 146, to a mobile computing device carried by the user. Examples of external devices 142 include keypads, Bluetooth® click buttons, smart watches, and mobile computer devices.
The example image generator 106, the example light source 108, the example audio generator 112, the example camera subsystem 128, the example converters 140, the example USB interfaces 134, 144, and/or, more generally, the example presentation generator 102 of FIG. 1 are implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. In some examples, at least one of the example image generator 106, the example light source 108, the example audio generator 112, the example camera subsystem 128, the example converters 140, the example USB interfaces 134, 144, and/or, more generally, the example presentation generator 102 of FIG. 1 is implemented by means of a logic circuit. As used herein, the term "logic circuit" is expressly defined as a physical device that includes at least one hardware component that is configured (e.g., through operation in accordance with a predetermined configuration and/or through execution of stored machine-readable instructions) to operate one or more machines and/or perform operations on one or more machines. Examples of a logic circuit include one or more processors, one or more coprocessors, one or more microprocessors, one or more controllers, one or more digital signal processors (DSPs), one or more application-specific integrated circuits (ASICs), one or more field-programmable gate arrays (FPGAs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices. Some exemplary logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations. Some exemplary logic circuits are hardware that executes machine-readable instructions to perform operations. Some exemplary logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions.
As used herein, each of the terms "tangible machine-readable medium", "non-transitory machine-readable medium", and "machine-readable storage device" is expressly defined as a storage medium (e.g., a hard disk, a digital versatile disc, a compact disc, flash memory, read-only memory, random-access memory, etc.) on which machine-readable instructions (for example, program code in the form of, for example, software and/or firmware) can be stored. Further, as used herein, each of the terms "tangible machine-readable medium", "non-transitory machine-readable medium" and "machine-readable storage device" is expressly defined to exclude propagating signals. That is, as used in any claim of this patent, a "tangible machine-readable medium" cannot be read as being implemented by a propagating signal. Furthermore, as used in any claim of this patent, a "non-transitory machine-readable medium" cannot be read as being implemented by a propagating signal. Furthermore, as used in any claim of this patent, a "machine-readable storage device" cannot be read as being implemented by a propagating signal.
As used herein, each of the terms "tangible machine-readable medium", "non-transitory machine-readable medium" and "machine-readable storage device" is expressly defined as a storage medium on which machine-readable instructions are stored for any suitable length of time (e.g., permanently, for an extended period of time (e.g., while a program associated with the machine-readable instructions is being executed), and/or for a short period of time (e.g., while the machine-readable instructions are being cached and/or buffered during a buffering process)).
FIGS. 2A and 2B illustrate an exemplary HUD assembly 200 that may implement the exemplary HUD assembly 100 of FIG. 1. The exemplary HUD assembly 200 of FIG. 2B includes a presentation generator 202 and an exemplary head mount 204. The exemplary presentation generator 202 of FIG. 2B accommodates or carries components that are configured to, for example, generate an audiovisual presentation for consumption by a user using the exemplary HUD assembly 200 of FIG. 2B. The presentation generator 202 of FIG. 2B, for example, accommodates or carries the components of the exemplary presentation generator 102 of FIG. 1.
FIG. 3 illustrates the exemplary HUD assembly 200 of FIGs. 2A and 2B attached to a user's head 300.
FIG. 4 illustrates exemplary cameras 402 which implement, for example, the cameras 130 of FIG. 1. As described above, the cameras 402 can be configured to record image data representative of a box object and the hands of a user when a user wearing the HUD assembly 200 looks at a box object. Although the example cameras 402 of FIG. 4 are positioned above each eyepiece, the cameras can be positioned at any suitable location, such as, for example, on the edges of the frames. FIG. 5A, for example, illustrates a user wearing the HUD assembly 200 and looking at a box object, with example cameras 502 attached to the sides of the head mount, recording image data comprising the box object and the user's hands.
FIG. 5B illustrates multiple users carrying the sample HUD assemblies to which cameras are attached, each user looking at a sample box object to be dimensioned. As shown in FIG. 5B, a first user who carries the HUD assembly 200 holds the box object, while a second user who carries the example HUD assembly 510 looks at the same box object. When the users are facing the same object, the cameras (not shown) attached to their HUD assemblies 200, 510 are each positioned to receive image data from the box object from multiple perspectives.
FIG. 5C illustrates an example box object to be dimensioned within the field of view of a mounted example camera, and a user wearing an example HUD assembly to which cameras are attached, the user looking at the example box object to be dimensioned. As shown in FIG. 5C, a user wearing the example HUD assembly holds the box object, while a camera 520 (e.g., a fixed overhead camera) is aimed at the same box object. Accordingly, the cameras (not shown) of the HUD assembly 200 and the mounted camera 520 are each positioned to record image data of the box object from multiple perspectives.
FIG. 6 is a block diagram representative of an exemplary logic circuit that can be used to implement, for example, the example image generator 106, the example light source 108, one or more example interfaces 136 and/or the example audio generator 112 of FIG. 1. The exemplary logic circuit of FIG. 6 is a processing platform 600 capable of executing machine-readable instructions to implement, for example, operations associated with the example HUD assembly 100 of FIG. 1.
The exemplary processing platform 600 of FIG. 6 comprises a processor 602 such as, for example, one or more microprocessors, controllers and/or any suitable type of processor. The exemplary processing platform 600 of FIG. 6 includes memory 604 (e.g., volatile memory, non-volatile memory) accessible to the processor 602 (e.g., via a memory controller). The exemplary processor 602 interacts with the memory 604 to obtain, for example, machine-readable instructions stored in the memory 604. Additionally or alternatively, the machine-readable instructions may be stored on one or more removable media (e.g., a compact disc, a digital versatile disc, removable flash memory, etc.) that may be coupled to the processing platform 600 to provide access to the machine-readable instructions stored thereon. In particular, the machine-readable instructions stored in the memory 604 may include instructions for executing one of the methods described in greater detail below in connection with FIG. 7.
The exemplary processing platform 600 of FIG. 6 further comprises a network interface 606 to enable communication with other machines via, for example, one or more networks. The exemplary network interface 606 includes any suitable type of communication interface(s) (e.g., wired and/or wireless interfaces) configured to operate in accordance with any suitable protocol. The exemplary processing platform 600 of FIG. 6 includes input/output (I/O) interfaces 608 to enable receipt of user input and communication of output data to the user.
The exemplary processing platform 600 of FIG. 6 can be configured to perform dimensioning operations using the image data recorded via the examples described herein. Any suitable technique for measuring dimensions of the object is applicable to the examples disclosed herein. Methods and devices for dimensioning a box object using image data recorded, for example, by the HUD assembly 100 and/or additional image recording devices are disclosed, for example, in U.S. Patent No. 9,741,134, filed December 16, 2013. Additional or alternative methods and devices that may be used in connection with examples disclosed herein for dimensioning an object include point cloud generators and point cloud data analysis to measure objects.
FIG. 7 is a flowchart representing an exemplary method in accordance with the teachings of this description. Although the example of FIG. 7 is described in connection with the example HUD assembly 100 of FIG. 1, the example of FIG. 7 may be implemented in connection with additional or alternative types of image capture devices, such as handheld mobile computer devices that have image capture capabilities.
At block 700, the HUD assembly 100 and/or other components of a dimensioning system (e.g., other mobile imaging devices, such as additional HUD assemblies, and/or fixed-location imaging devices, such as cameras mounted in a dimensioning space) are initialized (for example, turned on). At block 702, the initialization includes defining a local geometry for use in the proximity determinations described herein. For example, a coordinate system can be initialized at a starting point (0, 0, 0).
At block 704, the position of the HUD assembly 100 is determined (e.g., via the proximity sensor 127) and continuously updated as the HUD assembly moves. In the illustrated example, a plurality of position and motion information associated with the HUD assembly 100 is collected, such as, for example, pitch, roll, yaw, elevation, and heading information. In some examples, determining the position of the HUD assembly 100 includes calibrating the HUD assembly, for example as described in U.S. Patent No. 9,952,432.
If a dimensioning request has been received (block 706) (e.g., triggered by an input from a user, or by a sensor detecting that an object is currently in a target location or within a target range for dimensioning), the image capture equipment of the HUD assembly 100 records image data (for example, two-dimensional and/or three-dimensional data) representative of the field of view of the HUD assembly (block 708), which includes an object to be dimensioned (for example, a box). In the example of FIG. 7, the recorded image data is provided with a time stamp using a reference clock that is accessible to other image capture devices in the environment.
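A sketch of this time-stamped recording step follows; hud_camera.capture() and the dictionary layout are hypothetical placeholders for illustration, not an actual device API.

    import time

    def record_primary_frame(hud_camera, reference_clock=time.time):
        # Record image data for the dimensioning request and stamp it with a
        # reference clock that other image capture devices can also read.
        frame = hud_camera.capture()  # hypothetical camera call
        return {"source": "primary", "timestamp": reference_clock(), "data": frame}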
At block 710, image capture devices near the HUD assembly, as determined by the proximity sensor 127 and / or processing components in communication with the proximity sensor 127 and similar proximity sensors of other image capture devices, are identified.
In one example, a processor, for example as part of a real-time location system (RTLS), receives data indicative of the current position of the HUD assembly 100, as well as data indicative of the current position of at least one secondary image capture device. Secondary image capture devices may include, for example, other mobile image capture devices, such as additional HUD assemblies, and/or fixed-location image capture devices, such as cameras mounted in a dimensioning zone. In some examples, the processor receives data indicative of the current positions of a large number of secondary image capture devices. Based on the position data of the HUD assembly 100 and the various secondary image capture devices, the processor calculates the distance between the HUD assembly 100 and each of the secondary image capture devices to determine whether one of the secondary image capture devices is within a proximity distance threshold value (e.g., five feet, ten feet, 100 feet, etc.) of the HUD assembly 100.
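A sketch of that distance check; the dictionary inputs and the five-foot default are illustrative assumptions. The positions fed into such a check could come from the registry sketched earlier or from a real-time location system.

    import math

    def find_nearby_devices(primary_position, secondary_positions, threshold_ft=5.0):
        # Keep only the secondary image capture devices whose calculated distance
        # to the primary device is within the proximity distance threshold.
        return [
            device_id
            for device_id, position in secondary_positions.items()
            if math.dist(primary_position, position) <= threshold_ft
        ]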
As an additional or alternative example, the various secondary image capture devices transmit short-range wireless signals (e.g., Bluetooth signals) that are detectable by sensors of the HUD assembly (for example, proximity sensors). Based on a known range of such a signal, a secondary image capture device is determined to be within the proximity distance threshold value of the HUD assembly when the HUD assembly receives the signal. Conversely, a secondary image capture device is determined to be outside the proximity distance threshold value when the HUD assembly fails to receive the signal.
Further, the HUD assembly 100 and/or a processing component in communication with the HUD assembly 100 (e.g., a server) sends a request to each of the secondary image capture devices identified as being near the HUD assembly 100. The request indicates that the HUD assembly 100 is dimensioning an object at the location of the HUD assembly 100 and requests additional image data for the object at that location. In some examples, the secondary image capture devices are fixed image capture devices that are focused on a specific point. In such cases, the image capture devices may record the object when the object is in the field of view. Alternatively, when the other image capture devices are mobile devices that have a dynamic field of view, the image capture devices may determine (e.g., based on their heading, position, pitch, roll and yaw) when the object is in the field of view, and can record the object when the object is in the dynamic field of view.
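The request dispatch could look roughly like the following; send_request, the pose fields, and the cone-based field-of-view test are hypothetical stand-ins for whatever transport and geometry a real system would use.

    import math

    def dispatch_capture_requests(nearby_devices, object_location, timestamp):
        # Ask each nearby secondary image capture device for additional image
        # data of the object at the given location, tagged with the shared
        # reference-clock timestamp.
        for device in nearby_devices:
            device.send_request({  # hypothetical transport call
                "object_location": object_location,
                "timestamp": timestamp,
            })

    def object_in_dynamic_field_of_view(device_pose, object_location, half_angle_deg=30.0):
        # Very rough check for a mobile device: is the object within an assumed
        # cone around the device's current heading?
        dx = object_location[0] - device_pose["x"]
        dy = object_location[1] - device_pose["y"]
        bearing_to_object = math.degrees(math.atan2(dy, dx))
        offset = abs((bearing_to_object - device_pose["heading_deg"] + 180) % 360 - 180)
        return offset <= half_angle_deg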
At block 712, the nearby secondary image recording device(s) record image data representative of the object from different perspectives than the perspective of the HUD assembly 100. As shown in FIG. 5B, for example, a first HUD assembly faces the object from a first angle and records image data from one perspective, while a secondary HUD assembly in the vicinity of the first HUD assembly (namely, within a threshold distance) faces the object from a second angle and records image data from a different perspective. Similarly, as shown in FIG. 5C, a HUD assembly faces the object from a first angle and records image data from one perspective, while a mounted camera in the vicinity of the HUD assembly faces the object from a second angle and records image data from a different perspective. The recorded image data is time-stamped using the same reference as the primary image capture device and is provided to a database.
At block 714, a dimensioning processor (e.g., a server in communication with the HUD assembly 100 or a processor of the HUD assembly 100) receives the image data recorded by the HUD assembly 100 and any nearby image recording device. The dimensioning processor associates image data from the different sources according to common (e.g. within a threshold amount of time) time stamps.
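A sketch of associating image data from the different sources by common time stamps; the frame dictionaries (e.g., as produced by the capture sketch above) and the 0.1-second tolerance are assumptions.

    def group_by_timestamp(frames, tolerance_s=0.1):
        # Group time-stamped frames from different sources whose time stamps fall
        # within a threshold amount of time of one another.
        groups, current = [], []
        for frame in sorted(frames, key=lambda f: f["timestamp"]):
            if current and frame["timestamp"] - current[0]["timestamp"] > tolerance_s:
                groups.append(current)
                current = []
            current.append(frame)
        if current:
            groups.append(current)
        return groups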
At block 716, the dimensioning processor combines the different instances of image data from the different sources to form a composite representation of the object. In the illustrated example, the dimensioning processor generates a combined point cloud that contains the various instances of image data. However, in alternative examples, when the image data is not point cloud data, the dimensioning processor combines the image data in alternative ways to form a composite representation of the object.
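A minimal sketch of the point-cloud case, assuming each source already delivers its points in a common coordinate system (registration between perspectives is outside the scope of this sketch).

    import numpy as np

    def combine_point_clouds(clouds):
        # Merge the per-source point clouds (each an (N, 3) array of x, y, z
        # points) into one composite representation of the object.
        return np.vstack(clouds)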
At block 718, the dimensioning processor segments the point cloud from other data (e.g., background data) and calculates one or more dimensions of the object based on the segmented point cloud. In some examples, the dimensioning processor assigns a confidence score to the one or more dimensions. In some examples, the dimensioning processor communicates the one or more dimensions to the HUD assembly 100 and the results are displayed on its screen.
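A sketch of this final step, using a simple height cut for the segmentation and an axis-aligned bounding box for the dimensions; both simplifications, and the floor-height parameter, are assumptions made only for illustration.

    import numpy as np

    def dimension_object(composite_cloud, floor_height=0.0):
        # Segment object points from background data (here: everything at or
        # below an assumed floor height) and measure the axis-aligned extents.
        obj = composite_cloud[composite_cloud[:, 2] > floor_height]
        extents = obj.max(axis=0) - obj.min(axis=0)
        length, width, height = extents
        return length, width, height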
At block 720, the one or more dimensions are reported and / or stored. If the dimensioning process for the object is complete (block 722), control proceeds to block 724. Otherwise, the dimensioning process for the object continues.
At block 724, if the system is turned off, the process ends (block 726). Otherwise, control can return from block 724 to block 706 to determine whether another dimensioning request has been received.
For the sake of clarity and brevity of description, features are described herein as part of the same or separate embodiments; however, the scope of the invention may include embodiments having combinations of all or some of the features described herein. Embodiments shown may include the same or equivalent components unless otherwise specified. The benefits, solutions to problems, and any element that may cause any benefit or solution to occur or become more pronounced should not be interpreted as a critical, required, or essential feature or element of any or all of the claims.
In addition, in this document, relative terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual relationship or order between such entities or actions. The terms "comprises", "comprising", "has", "having", "includes", "including", "contains", "containing" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but may also include other elements that are not expressly listed or inherent to such a process, method, article, or device. An element preceded by "comprises ... a", "has ... a", "includes ... a", or "contains ... a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or device that comprises the element. The term "a" or "an" is defined as one or more, unless explicitly stated otherwise herein. The terms "substantially", "essentially", "approximately", or any other version thereof, are defined as being close to, as understood by those skilled in the art, and in one non-limiting embodiment the term is defined as being within 10%, in another embodiment as being within 5%, in another embodiment as being within 1%, and in another embodiment as being within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not specified.
Furthermore, some embodiments may include one or more generic or specialized processors (or "processing devices"), such as microprocessors, digital signal processors, customized processors, and field-programmable gate arrays (FPGAs), and uniquely stored program instructions (including both software and firmware) that control the one or more processors, in combination with certain non-processor circuits, to implement some, most, or all of the functions of the method and/or device described herein. Alternatively, some or all of the functions could be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function or some combinations of certain functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
In addition, an embodiment may be implemented as a computer-readable storage medium having stored thereon computer-readable code for programming a computer (e.g., including a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage media include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (read-only memory), a PROM (programmable read-only memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory) and a flash memory. Furthermore, despite potentially significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, it is expected that those skilled in the art will readily be able to generate such software instructions and programs and ICs with minimal experimentation.
The abstract of the description is provided to enable the reader to quickly ascertain the nature of the technical disclosure. It is submitted on the understanding that it will not be used to interpret the claims or to limit their scope. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage. A large number of variants will be clear to the skilled person. All variants are considered to be included within the scope of the invention as defined in the following claims.
Claims:
Claims (20)
[1]
CLAIMS
A computer-implemented method for using nearby image recording apparatus for recording image data representative of an object from multiple perspectives, the method comprising:
recording, by a first image capture device, first image data representative of an object from a first perspective;
determining, by a processor, whether a second image capture device is in the vicinity of the first image capture device; and when the second image capture device is in the vicinity of the first image capture device, sending a request to the second image capture device for second image data representative of the object from a second perspective, the first image data and the second image data being combinable to form a composite representation of the object.
[2]
The computer-implemented method of claim 1, wherein the first image capture device is a heads-up display assembly.
[3]
The computer-implemented method according to claim 1 or 2, wherein the first image capture device is a first heads-up display assembly associated with a first user, and the second image capture device is a second heads-up display assembly associated with a second user.
[4]
A computer-implemented method according to any of the preceding claims, wherein the first image capture device is a mobile device associated with a first user and the second image capture device is a stationary device.
[5]
The computer-implemented method according to any of the preceding claims, further comprising:
receiving a user request to record image data representative of the object; and activating (triggering) the first image capture device to record first image data based on the user request.
[6]
The computer-implemented method according to any of the preceding claims, further comprising:
detecting, by a sensor, that the object is currently in a target location or within a target range to dimension; and activating, by the processor, the first image capture device to record the first image data based on that the object is in the target location or the target range.
[7]
A computer-implemented method according to any one of the preceding claims, wherein determining whether the second image recording device is in the vicinity of the first image recording device comprises:
receiving, by a processor, from the first image capture device, data indicative of a position of the first image capture device;
receiving, by a processor, from the second image capture device, data indicative of a position of the second image capture device;
calculating, by a processor based on the position of the first image capture device and the position of the second image capture device, a distance between the first image capture device and the second image capture device; and
determining, by a processor, whether the calculated distance between the first image capture device and the second image capture device is within a proximity distance threshold value.
[8]
A computer-implemented method according to any one of the preceding claims, wherein determining whether the second image capture device is in the vicinity of the first image capture device comprises:
receiving, by a processor, from a fixed RFID reader, an indication that the fixed RFID reader has received an RF signal sent by the first image capture device;
determining, by a processor, based on the indication of the fixed RFID reader, a position of the first image capture device;
receiving, by a processor, from the fixed RFID reader, an indication that the fixed RFID reader has received an RF signal sent by the second imaging device;
determining, by a processor, based on the indication of the fixed RFID reader, the position of the second image capture device;
calculating, by a processor based on the position of the first image capture device and the position of the second image capture device, a distance between the first image capture device and the second image capture device; and determining, by a processor, whether the calculated distance between the first image capture device and the second image capture device is within a proximity distance threshold.
[9]
The computer-implemented method of any one of the preceding claims, wherein determining whether the second image capture device is in the vicinity of the first image capture device comprises:
detecting, by the first image capture device, a short-range wireless signal sent by the second image capture device; and determining, based on the detected short-range wireless signal, that the second image capture device is in the vicinity of the first image capture device.
[10]
The computer-implemented method according to any of the preceding claims, further comprising:
combining the first image data and the second image data to create a three-dimensional point cloud; and dimensioning the object using the three-dimensional point cloud.
[11]
A system for using nearby image recording apparatus for recording image data representative of an object from multiple perspectives, the system comprising:
a plurality of image capture devices;
memory configured to store computer-executable instructions; and a processor configured to interact with the plurality of image capture devices and the memory, and configured to execute the computer-executable instructions to cause the processor to:
recording, by a first image capture device, first image data representative of an object from a first perspective;
determining whether a second image capture device is in the vicinity of the first image capture device; and when the second image capture device is in the vicinity of the first image capture device, sending a request to the second image capture device for second image data representative of the object from a second perspective, the first image data and the second image data being combinable to form a composite representation of the object.
[12]
The system of claim 11, wherein the first image capture device is a heads-up display assembly.
[13]
The system of claim 11 or 12, wherein the first image capture device is a first heads-up display assembly associated with a first user and the second image capture device is a second heads-up display assembly associated with a second user.
[14]
The system of any one of the preceding claims 11-13, wherein the first image capture device is a mobile device, and the second image capture device is a stationary device.
[15]
The system of any one of the preceding claims 11-14, wherein the computer-executable instructions, when executed, cause the processor to:
receiving a user request to record image data representative of the object; and activating the first image recording device to record the first image data based on the user request.
[16]
The system of any one of the preceding claims 11-15, wherein the processor is configured to interact with a sensor, and wherein the computer-executable instructions, when executed, cause the processor to:
detecting, by the sensor, that the object is currently in a target location or within target range to dimension; and
activating the first image capture device to record the first image data based on the object being in the target location or the target range.
[17]
The system of any one of the preceding claims 11-16, wherein the computer-executable instructions, when executed, cause the processor to:
receiving, from the first image capture device, data indicative of a position of the first image capture device;
receiving, from the second image capture device, data indicative of a position of the second image capture device;
calculating, based on the position of the first image capture device and the position of the second image capture device, a distance between the first image capture device and the second image capture device; and determining whether the calculated distance between the first image capture device and the second image capture device is within a proximity distance threshold.
[18]
The system of any one of the preceding claims 11-17, wherein the computer-executable instructions, when executed, cause the processor to:
receiving, from a fixed RFID reader, an indication that the fixed RFID reader has received an RF signal sent by the first image capture device;
determining, based on the indication of the fixed RFID reader, a position of the first image capture device;
receiving, from the fixed RFID reader, an indication that the fixed RFID reader has received an RF signal sent by the second image capture device;
determining, based on the indication of the fixed RFID reader, the position of the second image capture device;
calculating, based on the position of the first image capture device and the position of the second image capture device, a distance between the first image capture device and the second image capture device; and determining whether the calculated distance between the first image capture device and the second image capture device is within a proximity distance threshold.
[19]
The system of any one of the preceding claims 11-18, wherein the computer-executable instructions, when executed, cause the processor to:
detecting, by the first image capture device, a short range wireless signal sent by the second image capture device; and determining, based on the detected short-range wireless signal, that the second image capture device is in the vicinity of the first image capture device.
[20]
The system of any one of the preceding claims 11-19, wherein the computer-executable instructions, when executed, cause the processor to:
combining the first image data and the second image data to form a three-dimensional point cloud; and dimensioning the object using the three-dimensional point cloud.
Similar technologies:
Publication number | Publication date | Patent title
US10240914B2|2019-03-26|Dimensioning system with guided alignment
US9754167B1|2017-09-05|Safety for wearable virtual reality devices via object detection and tracking
KR20160024986A|2016-03-07|Eye tracking via depth camera
US10152634B2|2018-12-11|Methods and systems for contextually processing imagery
BE1025917A9|2019-12-17|METHODS AND DEVICES FOR DIMENSIONING AN OBJECT USING NEAR DEVICES
US20150379770A1|2015-12-31|Digital action in response to object interaction
EP2779092A1|2014-09-17|Apparatus and techniques for determining object depth in images
US9661470B1|2017-05-23|Methods and systems for locating an actor within an environment
US20180053352A1|2018-02-22|Occluding augmented reality content or thermal imagery for simultaneous display
US20200293793A1|2020-09-17|Methods and systems for video surveillance
CN111149357A|2020-05-12|3D 360 degree depth projector
JP2019164842A|2019-09-26|Human body action analysis method, human body action analysis device, equipment, and computer-readable storage medium
US20140175162A1|2014-06-26|Identifying Products As A Consumer Moves Within A Retail Store
US20200326775A1|2020-10-15|Head-mounted display device and operating method of the same
BE1025916B1|2020-02-07|METHODS AND DEVICES FOR QUICK DIMENSIONING AN OBJECT
KR101544957B1|2015-08-19|Laser-guided head mounted display for augmented reality and method thereof
US20170178107A1|2017-06-22|Information processing apparatus, information processing method, recording medium and pos terminal apparatus
US10484656B1|2019-11-19|Driver system resonant frequency determination
US10634913B2|2020-04-28|Systems and methods for task-based adjustable focal distance for heads-up displays
KR20210150881A|2021-12-13|Electronic apparatus and operaintg method thereof
EP3462128B1|2021-11-10|3d 360-degree camera system
US20210373658A1|2021-12-02|Electronic device and operating method thereof
KR20210147837A|2021-12-07|Electronic apparatus and operaintg method thereof
US20210003391A1|2021-01-07|Object detection device, object detection system, object detection method, and non-transitory computer-readable medium storing program
Druml et al.2020|A Smartphone-Based Virtual White Cane Prototype Featuring Time-of-Flight 3D Imaging
Patent family:
Publication number | Publication date
BE1025917A1|2019-08-07|
DE112018005351T5|2020-06-25|
GB202005391D0|2020-05-27|
US11146775B2|2021-10-12|
WO2019094125A1|2019-05-16|
GB2581060A|2020-08-05|
CN111316059B|2021-12-07|
US20190141312A1|2019-05-09|
BE1025917B1|2020-02-07|
CN111316059A|2020-06-19|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title

JP3324295B2|1994-09-26|2002-09-17|日産自動車株式会社|Gaze direction measuring device for vehicles|
US7916895B2|2007-05-07|2011-03-29|Harris Corporation|Systems and methods for improved target tracking for tactical imaging|
US8253538B1|2008-05-29|2012-08-28|Marvell International Ltd.|Asset management using mobile radio-frequency identification readers|
US20110190972A1|2010-02-02|2011-08-04|Gm Global Technology Operations, Inc.|Grid unlock|
EP2420854B1|2010-08-17|2014-04-09|BlackBerry Limited|Tagging a location by pairing devices|
US20120190403A1|2011-01-26|2012-07-26|Research In Motion Limited|Apparatus and method for synchronizing media capture in a wireless device|
GB201208088D0|2012-05-09|2012-06-20|Ncam Sollutions Ltd|Ncam|
US20140160157A1|2012-12-11|2014-06-12|Adam G. Poulos|People-triggered holographic reminders|
NZ708121A|2013-02-13|2017-08-25|Philip Morris Products Sa|Evaluating porosity distribution within a porous rod|
US9741134B2|2013-12-16|2017-08-22|Symbol Technologies, Llc|Method and apparatus for dimensioning box object|
US10003786B2|2015-09-25|2018-06-19|Intel Corporation|Method and system of 3D image capture with dynamic cameras|
US9952432B2|2016-04-08|2018-04-24|Symbol Technologies, Llc|Arrangement for, and method of, calibrating a wearable apparatus to electro-optically read targets|
US10484599B2|2016-10-25|2019-11-19|Microsoft Technology Licensing, Llc|Simulating depth of field|
WO2018098304A1|2016-11-22|2018-05-31|Ovesny Erik D|System and method for location-based image capture between mobile devices|
US10455364B2|2016-12-12|2019-10-22|Position Imaging, Inc.|System and method of personalized navigation inside a business enterprise|
US10818188B2|2016-12-13|2020-10-27|Direct Current Capital LLC|Method for dispatching a vehicle to a user's location|
US20180220125A1|2017-01-31|2018-08-02|Tetavi Ltd.|System and method for rendering free viewpoint video for sport applications|
US10750083B2|2018-04-06|2020-08-18|Motorola Solutions, Inc.|Systems and methods for processing digital image data representing multiple views of an object of interest|CN109460690A|2017-09-01|2019-03-12|虹软多媒体信息技术有限公司|A kind of method and apparatus for pattern-recognition|
US10930001B2|2018-05-29|2021-02-23|Zebra Technologies Corporation|Data capture system and method for object dimensioning|
Legal status:
2020-04-02| FG| Patent granted|Effective date: 20200207 |
Priority:
Application number | Filing date | Patent title
US201762582908P| true| 2017-11-07|2017-11-07|
US15/986,136|US11146775B2|2017-11-07|2018-05-22|Methods and apparatus for dimensioning an object using proximate devices|